Denoising Diffusion Probabilistic Models (DDPMs) are emerging in text-to-speech (TTS) synthesis because of their strong capability of generating high-fidelity samples. However, their iterative refinement process in high-dimensional data space results in slow inference speed, which restricts their application in real-time systems. Previous works have explored speeding up inference by minimizing the number of inference steps, but at the cost of sample quality. In this work, to improve the inference speed of DDPM-based TTS models while achieving high sample quality, we propose ResGrad, a lightweight diffusion model which learns to refine the output spectrogram of an existing TTS model (e.g., FastSpeech 2) by predicting the residual between the model output and the corresponding ground-truth speech. ResGrad has several advantages: 1) Compared with other acceleration methods for DDPMs, which need to synthesize speech from scratch, ResGrad reduces the complexity of the task by changing the generation target from the ground-truth mel-spectrogram to the residual, resulting in a more lightweight model and thus a smaller real-time factor. 2) ResGrad is employed in the inference process of the existing TTS model in a plug-and-play way, without re-training this model. We verify ResGrad on the single-speaker dataset LJSpeech and on two more challenging datasets with multiple speakers (LibriTTS) and a high sampling rate (VCTK). Experimental results show that, in comparison with other speed-up methods for DDPMs: 1) ResGrad achieves better sample quality at the same inference speed measured by real-time factor; 2) at similar speech quality, ResGrad synthesizes speech more than 10 times faster than baseline methods. Audio samples are available at https://resgrad1.github.io/.
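As a rough illustration of the residual idea, the sketch below shows one diffusion training step in which the target is the difference between the FastSpeech 2 mel-spectrogram and the ground truth; the function, the denoiser interface, and all argument names are assumptions for this sketch, not the authors' released code.

import torch

def resgrad_training_step(denoiser, fs2_mel, gt_mel, alphas_cumprod):
    """fs2_mel, gt_mel: (B, T, n_mels); alphas_cumprod: (num_steps,) noise schedule (assumed)."""
    residual = gt_mel - fs2_mel                       # generation target is the residual, not the full mel
    t = torch.randint(0, len(alphas_cumprod), (residual.size(0),))
    a_bar = alphas_cumprod[t].view(-1, 1, 1)
    noise = torch.randn_like(residual)
    x_t = a_bar.sqrt() * residual + (1 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)
    pred_noise = denoiser(x_t, t, cond=fs2_mel)       # lightweight model conditioned on the coarse output
    return torch.nn.functional.mse_loss(pred_noise, noise)

At inference, the sampled residual would simply be added to the FastSpeech 2 output to obtain the refined mel-spectrogram.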
Existing training criteria in automatic speech recognition (ASR) permit the model to freely explore more than one time alignment between the feature and label sequences. In this paper, we use entropy to measure a model's uncertainty, i.e., how it chooses to distribute the probability mass over the set of allowed alignments. Furthermore, we evaluate the effect of entropy regularization in encouraging the model to distribute the probability mass over only a smaller subset of the allowed alignments. Experiments show that entropy regularization enables a much simpler decoding method without sacrificing word error rate, and provides better time alignment quality.
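For intuition only, the toy sketch below computes the entropy of a model's distribution over an explicitly enumerated set of allowed alignments, which could then be added to the training loss as a regularizer; real CTC/transducer systems would compute this quantity with forward-backward recursions, and the function here is a hypothetical illustration.

import torch

def alignment_entropy(frame_log_probs, alignments):
    """frame_log_probs: (T, V) per-frame label log-probabilities;
    alignments: explicit list of length-T label-index sequences allowed by the criterion."""
    scores = torch.stack([frame_log_probs[torch.arange(len(a)), torch.tensor(a)].sum()
                          for a in alignments])           # unnormalized log p(alignment)
    posterior = torch.softmax(scores, dim=0)              # distribution over the allowed alignments
    entropy = -(posterior * torch.log(posterior + 1e-12)).sum()
    return entropy                                         # e.g., loss = criterion_loss + lam * entropy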
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task in computer vision. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without any further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel framework called CLIP-ES for WSSS. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduced a confidence-guided loss (CGL) to mitigate noise and focus on confident regions. Our proposed framework dramatically reduces the cost of training for WSSS and shows the capability of localizing objects in CLIP. Our CLIP-ES achieves SOTA performance on Pascal VOC 2012 and MS COCO 2014 while only taking 10% time of previous methods for the pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES.
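As a loose sketch of what putting a softmax before GradCAM looks like (a generic illustration under assumed inputs, not the CLIP-ES implementation), the class score used for back-propagation is the softmax probability over target plus background classes rather than the raw logit:

import torch

def softmax_gradcam(features, logits_fn, target_idx):
    """features: (C, H, W) activations; logits_fn: differentiable map from features to class logits
    over target and non-target/background classes (both are assumed interfaces)."""
    features = features.detach().clone().requires_grad_(True)
    logits = logits_fn(features)                       # (num_classes,)
    score = torch.softmax(logits, dim=0)[target_idx]   # softmax suppresses confusion from other classes
    grads, = torch.autograd.grad(score, features)
    weights = grads.mean(dim=(1, 2), keepdim=True)     # GradCAM channel weights
    cam = torch.relu((weights * features).sum(dim=0))  # (H, W) localization map
    return cam / (cam.max() + 1e-8)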
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISPs and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and are able to process Full HD photos in less than 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
Social recommendation leverages social relations to enhance representation learning for recommendation. Most social recommendation models unify the user representations for user-item interactions (the collaborative domain) and social relations (the social domain). However, such an approach may fail to model users' heterogeneous behavior patterns in the two domains, impairing the expressiveness of the user representations. In this work, to address this limitation, we propose DcRec, a novel disentangled contrastive learning framework for social recommendation. More specifically, we propose to learn disentangled user representations from the item and social domains. Moreover, disentangled contrastive learning is designed to perform knowledge transfer between the disentangled user representations for social recommendation. Comprehensive experiments on various real-world datasets demonstrate the superiority of our proposed model.
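A minimal sketch of one way such cross-domain knowledge transfer can be expressed, namely an InfoNCE-style contrastive term that pulls together the two representations of the same user (the function and tensor names are assumptions for illustration, not the paper's code):

import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(z_collab, z_social, temperature=0.2):
    """z_collab, z_social: (num_users, d) user embeddings from the collaborative and social domains."""
    z1 = F.normalize(z_collab, dim=1)
    z2 = F.normalize(z_social, dim=1)
    logits = z1 @ z2.t() / temperature        # similarity between every collaborative/social pair
    labels = torch.arange(z1.size(0))         # the positive pair is the same user in both domains
    return F.cross_entropy(logits, labels)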
The data explosion and the growth of model sizes have driven remarkable advances in large-scale machine learning, but they also make model training time-consuming and model storage difficult. To address these problems in a distributed model training setting with high computational efficiency and limited devices, two main difficulties remain. On the one hand, the communication cost of exchanging information, e.g., stochastic gradients among different workers, is a key bottleneck for distributed training efficiency. On the other hand, a model with fewer parameters is easier to store and communicate, but risks damaging model performance. To simultaneously balance communication cost, model capacity, and model performance, we propose quantized composite mirror descent adaptive subgradient (QCMD AdaGrad) and quantized regularized dual averaging adaptive subgradient (QRDA AdaGrad) for distributed training. Specifically, we explore the combination of gradient quantization and sparse models to reduce the communication cost per iteration in distributed training. An adaptive learning-rate matrix is constructed from the quantized gradients to strike a balance between communication cost, accuracy, and model sparsity. Moreover, we theoretically find that a large quantization error introduces extra noise, which affects the convergence and sparsity of the model. Therefore, a threshold quantization strategy with a relatively small error is adopted in QCMD AdaGrad and QRDA AdaGrad to improve the signal-to-noise ratio and preserve the sparsity of the model. Both theoretical analysis and empirical results demonstrate the efficacy and efficiency of the proposed algorithms.
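For intuition, one common threshold-quantization variant for gradient communication is sketched below: entries above a threshold are compressed to a shared sign-and-scale representation while small entries are kept, limiting the quantization error. This is a generic illustration under assumed conventions, not necessarily the exact scheme used in QCMD/QRDA AdaGrad.

import numpy as np

def threshold_quantize(grad, threshold):
    """Compress a gradient vector before communication (illustrative scheme)."""
    large = np.abs(grad) >= threshold
    scale = np.abs(grad[large]).mean() if large.any() else 0.0
    return np.where(large, np.sign(grad) * scale, grad)   # large entries: sign * shared scale; small entries kept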
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically decreases. Although several FAT variants spare no effort to prevent overfitting, they sacrifice much computational cost. In this paper, we explore the differences between the training processes of SAT and FAT and observe that the attack success rate of the adversarial examples (AEs) in FAT gradually deteriorates in the late training stage, leading to overfitting. The AEs are generated by the fast gradient sign method (FGSM) with a zero or random initialization. Based on this observation, and after investigating several initialization strategies, we propose a prior-guided FGSM initialization method to avoid overfitting, which improves the quality of the AEs throughout the whole training process. The initialization is formed by leveraging historically generated AEs without additional computational cost. We further provide a theoretical analysis for the proposed initialization method. We also propose a simple yet effective regularizer based on the prior-guided initialization, i.e., the currently generated perturbation should not deviate too much from the prior-guided initialization; the regularizer adopts both the historical and the current adversarial perturbations to guide the model learning. Evaluations on four datasets demonstrate that the proposed method can prevent catastrophic overfitting and outperforms state-of-the-art FAT methods. The code is released at https://github.com/jiaxiaojunqaq/fgsm-pgi.
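As a rough sketch of the initialization idea (assuming a standard FGSM attack step; the function, signature, and stored prior are hypothetical, not the released FGSM-PGI code), the perturbation kept from the previous epoch replaces the usual zero or random start:

import torch

def fgsm_with_prior_init(model, loss_fn, x, y, prior_delta, epsilon, alpha):
    delta = prior_delta.clone().detach().requires_grad_(True)    # historical perturbation as the prior
    loss = loss_fn(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon).detach()
    x_adv = (x + delta).clamp(0, 1)                              # adversarial example used for training
    return x_adv, delta                                          # delta is stored as the prior for the next epoch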
Following the success of convolution on non-Euclidean spaces, the corresponding pooling approaches have also been validated on various tasks on graphs. However, because of fixed compression quotas and stepwise pooling designs, these hierarchical pooling methods still suffer from local structure damage and suboptimality. In this work, we propose a hierarchical pooling approach, SEP, to address these two issues. Specifically, without assigning layer-specific compression quotas, a global optimization algorithm is designed to generate the cluster assignment matrices for pooling all at once. We then illustrate the local structure damage of previous methods through the reconstruction of ring and grid synthetic graphs. On top of SEP, we further design two classification models for graph classification and node classification, respectively. The results show that SEP outperforms state-of-the-art graph pooling methods on graph classification benchmarks and obtains superior performance on node classification.
By forcing at most N out of M consecutive weights to be non-zero, the recent N:M network sparsity has received increasing attention for its two attractive advantages: 1) promising performance at high sparsity; 2) significant speedups on NVIDIA A100 GPUs. Recent studies require either an expensive training phase or heavy gradient computation. In this paper, we show that N:M learning can be naturally characterized as a combinatorial problem that searches for the best combination candidate within a finite collection. Motivated by this characterization, we solve N:M sparsity in an efficient divide-and-conquer manner. First, we divide the weight vector into $C_{\text{M}}^{\text{N}}$ combination subsets of a fixed size N. Then, we conquer the combinatorial problem by assigning each combination a learnable score that is jointly optimized with its associated weights. We prove that the introduced scoring mechanism can well model the relative importance between combination subsets. By gradually removing low-scored subsets, N:M fine-grained sparsity can be efficiently optimized during the normal training phase. Comprehensive experiments show that our learning best combination (LBC) consistently performs better than off-the-shelf N:M sparsity methods. Our code is released at https://github.com/zyxxmu/lbc.
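As a minimal sketch of what selecting N of every M consecutive weights with learnable scores can look like (the function and names are assumptions for illustration, not the released LBC code):

import torch

def nm_sparse_weight(weight, scores, n=2, m=4):
    """weight, scores: same shape, with numel divisible by m; keeps the n highest-scoring weights per group of m."""
    groups = scores.reshape(-1, m)
    topk = groups.topk(n, dim=1).indices
    mask = torch.zeros_like(groups).scatter_(1, topk, 1.0).reshape(weight.shape)
    # low-scored positions are removed; in training, a straight-through estimator would let the task
    # loss update the scores jointly with the surviving weights
    return weight * mask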
Lightweight super-resolution (SR) models have received considerable attention for their serviceability on mobile devices. Many efforts employ network quantization to compress SR models. However, these methods suffer from severe performance degradation when quantizing SR models to ultra-low precision (e.g., 2-bit and 3-bit) with low-cost layer-wise quantizers. In this paper, we identify that the performance drop comes from the contradiction between the layer-wise symmetric quantizer and the highly asymmetric activation distributions in SR models. This discrepancy leads to either a waste of quantization levels or a loss of detail in reconstructed images. Therefore, we propose a novel activation quantizer, referred to as Dynamic Dual Trainable Bounds (DDTB), to accommodate the asymmetry of the activations. Specifically, DDTB consists of: 1) a layer-wise quantizer with trainable upper and lower bounds to tackle the highly asymmetric activations; 2) a dynamic gate controller that adaptively adjusts the upper and lower bounds at runtime to overcome the drastically varying activation ranges across different samples. To reduce the extra overhead, the dynamic gate controller is quantized to 2-bit and applied to only part of the SR network according to the introduced dynamic intensity. Extensive experiments demonstrate that our DDTB achieves significant performance improvements at ultra-low precision. For example, DDTB achieves a 0.70 dB PSNR increase on the Urban100 benchmark when quantizing EDSR to 2-bit and upscaling output images by x4. The code is at https://github.com/zysxmu/ddtb.
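A minimal sketch of an asymmetric activation quantizer with trainable lower and upper clipping bounds, i.e., the core mechanism described above in simplified form (the function is an assumed stand-in, not the released DDTB code; in training, lower and upper would be learnable parameters):

import torch

def quantize_activation(x, lower, upper, bits=2):
    levels = 2 ** bits - 1
    x_clamped = torch.clamp(x, min=lower, max=upper)     # trainable, asymmetric clipping range
    scale = (upper - lower) / levels
    q = torch.round((x_clamped - lower) / scale)         # integer levels in [0, levels]
    x_q = q * scale + lower                              # dequantized activation
    # straight-through estimator so gradients reach x, lower, and upper during training
    return x_clamped + (x_q - x_clamped).detach()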